AI Deregulation Sweeps Both Sides of the Atlantic

Analysis

Both the U.S. and the EU are retreating from efforts to regulate the risks of AI. With growing economic nationalism and AI spending driving markets, the two sides are competing over AI innovation instead of collaborating to address AI risks and support trust in the technology. The Trump administration’s latest Executive Order on AI seeks to preempt state laws without a federal framework already in place, leaving a gap in regulation. Meanwhile, the EU is scaling back the AI Act. Big Tech stands to gain from the deregulatory pushes, while the populations of the U.S. and Europe absorb the risks.


With the global AI race underway, the EU and the U.S. have moved to deregulate in an attempt to attract more investment and foster greater innovation. In the U.S., President Trump has signed multiple Executive Orders (EOs) on AI, most recently one aiming to override, or preempt, state laws that seek to address some of AI’s risks. The EU, on the other hand, is considering delaying implementation of its regulatory framework, the EU AI Act.

The U.S. and the EU have thus far taken different approaches to regulating the risks of AI. The U.S. has primarily addressed AI through state laws, with common regulatory themes including child safety (extending to deepfakes and intimate imagery), AI transparency, and limits on automated decision-making. The EU’s AI Act outlines a more robust approach, sorting AI systems into four risk categories, each with a different level of safeguards.

The latest Executive Order

President Trump signed an EO on December 11th that aims to limit state AI laws in order to make way for a national policy framework. Despite the rapid growth of the AI industry, the Trump administration argues that state laws are paralyzing and inconsistent, and that they hinder innovation. The EO therefore establishes an AI Litigation Task Force to challenge state AI laws and to limit federal funding for states whose regulations, including laws promoting diversity-based or anti-bias interventions in model training, diverge from the administration’s vision for AI.

The EO, perhaps in an effort to appease conservative states, will leave laws on some topics unchallenged, such as child safety and data center infrastructure, while seeking to revoke protections against the majority of AI risks. As AI companies rapidly expand, creating the possibility of an economic bubble, many communities are already bearing the costs. Demand for energy from AI data centers undermines efforts to address climate change and disproportionately creates hazards for the low-income neighborhoods where they are built. AI can also erode data privacy and contribute to violence, as in the widely reported cases of teenagers whose deaths were linked to harmful AI interactions. The EO overlooks a core reality: meaningful protection begins at the training stage, through choices about data, objectives, and built-in safety constraints. Those are exactly the protections provided by the laws the EO seeks to preempt, not the downstream, post-deployment controls on child safety that the EO leaves in place.

There are reasons to be skeptical of the EO’s goal of a national policy framework. The EO sets no deadline and gives no clear guidance for building a national AI framework, yet it establishes clear timelines for preempting state legislation. Additionally, the EO tasks David Sacks, a venture capitalist who has investments in AI and has advocated for deregulation, with overseeing the development of the framework. These developments offer little confidence that timely and substantial federal AI regulation will materialize, and they instead mirror the Trump administration’s earlier push for a ten-year moratorium that would have preempted state AI laws without providing a clear federal framework or timeline to replace them.

The U.S. Context

President Trump’s latest efforts are part of a broader vision for AI: high-speed development, regardless of safety and ethics considerations. Last month Trump signed another EO launching the Genesis Mission, designed to secure American technological dominance in the ongoing global AI race, including through collaboration with Big Tech. The administration uses grand metaphors, comparing the AI race to the Manhattan Project and the Apollo missions in both its urgency and its competitive stakes. In October 2023, the Biden administration issued an ambitious executive order to build a federal framework for safe and responsible AI. While its impact was limited, it was the most substantial federal attempt to regulate AI. Shortly after taking office, President Trump revoked Biden’s AI order. Big Tech companies have pushed for AI deregulation and have cozied up to the White House, even helping pay for the new ballroom. Without meaningful regulation in place, dominant AI firms are able to further exploit people, extracting and monetizing personal data, deploying addictive and manipulative systems, and releasing models that expose users to well-documented harms with little accountability.

Across the Atlantic: Simplification or deregulation? 

Across the Atlantic, the EU is undergoing its own process to deregulate AI. The European Commission took a step back from its pioneering AI Act in the same week that a draft version of Trump’s latest EO leaked. The Digital Omnibus Regulation Proposal included amendments to existing digital policies, including the AI Act. While EU officials publicly claim that no third country influenced this decision, Big Tech substantially lobbied for the Digital Omnibus. Additionally, there are rumblings that the Trump administration and EU officials are discussing amendments to the AI Act.

The Digital Omnibus would amend the General Data Protection Regulation (GDPR) to allow personal data to be used in AI training and would delay implementation of aspects of the AI Act, pending simplification and clarification. While simplification in and of itself is not necessarily a bad idea, the Commission’s plans could create a watered-down version of the Act that, as MEP Alexandra Geese stated, would “dismantle the protection of European citizens for the benefit of U.S. tech giants”. Proposed changes could allow greater use of EU citizens’ data to train AI systems, undermining data privacy protections, while expanded AI infrastructure, particularly data centers, could impose long-term environmental costs on local communities. Although industry emphasizes potential job creation, AI growth without protections also risks workforce displacement.

The EU Context

There has been more visible pushback against AI deregulation in the EU than in the U.S. Over 120 civil society groups signed a letter to the Commission criticizing the Digital Omnibus as “the biggest rollback of digital rights in EU history” under the guise of “simplification.” Among the concerns raised were the rollback of data privacy protections and the expansion of unchecked surveillance through the changes to GDPR. Additionally, advocates warned that weakening social and environmental protections could have serious consequences; as they noted, “together, these changes risk worsening working conditions, allowing dangerous chemicals into cosmetics, and polluting the air and water, leaving communities even more exposed to harm”.

The idea of simplification arose in response to the 2024 Draghi Report on EU competitiveness, which warned that the EU was losing to the U.S. and China in the global innovation race, particularly in AI. Draghi suggested that the EU AI Act be paused until AI threats are clearer, echoing the argument of many companies, nervous that the implementation timeline was too short, that the Act’s rollout should be delayed. The European Commission released its Competitiveness Compass in January 2025, designed to translate the Draghi Report’s recommendations into action. What we have seen, though, has not been mere simplification, but rather a deregulatory push.

The stakes are too high: deregulation cannot be the answer

Even some who have worked in the AI industry have acknowledged the need for regulation. Geoffrey Hinton, widely known as the Godfather of AI, did groundbreaking AI development work and subsequently left industry so that he could speak freely; he has said the threats AI poses are very real and require strong regulation. Experts like Hinton warn that we may soon see sweeping disruption to working-class jobs, major shifts in foreign policy and warfare tactics, and the emergence of systems whose intelligence could challenge human control. Meaningful regulation is essential if we want to give humanity the best chance of remaining in control and safeguarding people against the very real and imminent risks posed by advanced AI technologies. Moreover, the rollback of protections means we are prioritizing the interests of tech companies over the people adapting to life with AI.

Regulation does not inherently stifle industry; when designed thoughtfully, it can foster trust, stability, competition, and sustainable innovation. In the context of AI, protections that put people first are not a barrier to progress but a prerequisite for widespread adoption. When users trust that the products they rely on are safe, respectful of their privacy, and developed with public interests in mind, they are far more likely to embrace them. AI-enabled surveillance and data extraction, the environmental toll of data centers, and chatbot-induced tragedies all risk eroding that trust. Innovation, however, does not disappear under constraints; it adapts. From clean energy breakthroughs driven by environmental regulation to life-saving advances in automotive safety, history shows that guardrails often spur better, safer technologies.

If the U.S. and EU hope to lead globally in AI, they must lead in governance as well, setting standards that protect people, limit social and environmental costs, and embrace responsible innovation. Instead of framing AI leadership primarily through competition with China, the two have an opportunity to strengthen strained alliances through common goals on AI, better serving their citizens and setting global benchmarks. Coordinated leadership on AI could shape international norms, expand markets, and ensure democratic values guide the future of the technology.